
    Strategies for real time reservoir management

    Real-time reservoir management has been developed to cope with a shrinking labor force and rising demand for energy. This dissertation seeks good strategies for real-time reservoir management. First, two simulator-independent optimization algorithms are investigated: ensemble-based optimization (EnOpt) and bound optimization by quadratic approximation (BOBYQA). Multiscale regularization is applied to both to find appropriate frequencies for well control adjustment. Second, two gathered EnKF methods are proposed to save computational cost and reduce sampling error: gathered EnKF with a fixed gather size and adaptively gathered EnKF. Finally, oil price uncertainty is forecasted and quantified with three price forecasting models: conventional forecasting, bootstrap forecasting and sequential Gaussian simulation forecasting. The relative effects of oil price and its volatility on the optimization strategies are investigated. The key findings of this dissertation are: (a) if multiscale regularization is not used, EnOpt converges to a higher net present value (NPV) than BOBYQA, even though BOBYQA uses second-order Hessian information whereas EnOpt uses first-order gradients. BOBYQA performs comparably only if multiscale regularization is used. Multiscale regularization results in a higher optimized NPV with simpler well control strategies and converges in fewer iterations; (b) gathering observations not only reduces sampling errors but also saves a significant amount of computational cost. In addition, adaptively gathered EnKF is superior to gathered EnKF with a fixed gather size when the prior ensemble mean is not near the truth; (c) a good oil price forecasting model can improve NPV by more than four percent; and (d) instability in oil prices also causes fluctuations in the optimized well controls.
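
    As a rough illustration of the EnOpt idea described above, the sketch below estimates an ascent direction from the cross-covariance between perturbed well controls and their NPV values. The npv function is a hypothetical stand-in for a reservoir-simulator evaluation, and the ensemble size, perturbation scale, and step length are placeholders, not values from the dissertation.

    ```python
    import numpy as np

    def enopt_step(x, npv, n_ens=50, sigma=0.05, step=0.1, rng=None):
        """One EnOpt-style update: estimate an ascent direction from the
        cross-covariance between perturbed controls and their NPV values."""
        rng = rng or np.random.default_rng(0)
        X = x + sigma * rng.standard_normal((n_ens, x.size))  # perturbed control ensemble
        J = np.array([npv(xi) for xi in X])                   # NPV of each ensemble member
        dX, dJ = X - X.mean(axis=0), J - J.mean()
        grad = dX.T @ dJ / (n_ens - 1)                        # ensemble cross-covariance ~ gradient
        return x + step * grad / (np.linalg.norm(grad) + 1e-12)

    # Toy usage with a quadratic stand-in for the reservoir simulator.
    toy_npv = lambda c: -np.sum((c - 0.3) ** 2)
    x = np.zeros(4)
    for _ in range(30):
        x = enopt_step(x, toy_npv)
    print(x)  # approaches the optimum near 0.3 in this toy problem
    ```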

    MDPFuzz: Testing Models Solving Markov Decision Processes

    The Markov decision process (MDP) provides a mathematical framework for modeling sequential decision-making problems, many of which are crucial to security and safety, such as autonomous driving and robot control. The rapid development of artificial intelligence research has created efficient methods for solving MDPs, such as deep neural networks (DNNs), reinforcement learning (RL), and imitation learning (IL). However, these popular models for solving MDPs are neither thoroughly tested nor rigorously reliable. We present MDPFuzz, the first black-box fuzz testing framework for models solving MDPs. MDPFuzz forms testing oracles by checking whether the target model enters abnormal and dangerous states. During fuzzing, MDPFuzz decides which mutated state to retain by measuring whether it can reduce cumulative rewards or form a new state sequence. We design efficient techniques to quantify the "freshness" of a state sequence using Gaussian mixture models (GMMs) and dynamic expectation-maximization (DynEM). We also prioritize states with high potential for revealing crashes by estimating the local sensitivity of target models over states. MDPFuzz is evaluated on five state-of-the-art models for solving MDPs, including supervised DNN, RL, IL, and multi-agent RL models. Our evaluation includes scenarios of autonomous driving, aircraft collision avoidance, and two games that are often used to benchmark RL. During a 12-hour run, we find over 80 crash-triggering state sequences on each model. We show the inspiring finding that crash-triggering states, though they look normal, induce distinct neuron activation patterns compared with normal states. We further develop an abnormal behavior detector to harden all the evaluated models and repair them with the findings of MDPFuzz to significantly enhance their robustness without sacrificing accuracy.
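
    A rough sketch of the kind of fuzzing loop described above is given below. The env, policy, and mutate objects are hypothetical stand-ins for a concrete MDP environment, the model under test, and a state mutator, and a batch-fit scikit-learn Gaussian mixture is used as a simple stand-in for the paper's DynEM freshness estimator; none of this reproduces the actual MDPFuzz implementation.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def run_episode(env, policy, s0, horizon=100):
        """Roll out the model from initial state s0. Returns cumulative reward,
        an aggregate feature of the visited states, and the oracle verdict
        (whether an abnormal or dangerous state was reached)."""
        s, total, feats, crash = env.reset_to(s0), 0.0, [], False
        for _ in range(horizon):
            s, r, done, crash = env.step(policy(s))
            total += r
            feats.append(env.features(s))
            if crash or done:
                break
        return total, np.mean(feats, axis=0), crash

    def fuzz(env, policy, seeds, mutate, n_iter=1000, fresh_thresh=-10.0,
             rng=np.random.default_rng(0)):
        runs = [run_episode(env, policy, s0) for s0 in seeds]
        corpus = [(s0, r[0]) for s0, r in zip(seeds, runs)]
        gmm = GaussianMixture(n_components=5).fit(np.stack([r[1] for r in runs]))
        crashes = []
        for _ in range(n_iter):
            s0, base_reward = corpus[rng.integers(len(corpus))]
            s_mut = mutate(s0)
            reward, feat, crash = run_episode(env, policy, s_mut)
            if crash:
                crashes.append(s_mut)                 # oracle: abnormal/dangerous state reached
            fresh = gmm.score_samples(feat[None, :])[0] < fresh_thresh
            if reward < base_reward or fresh:         # keep mutants that lower reward or look new
                corpus.append((s_mut, reward))
        return crashes
    ```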

    Enhancing Deep Neural Networks Testing by Traversing Data Manifold

    We develop DEEPTRAVERSAL, a feedback-driven framework to test DNNs. DEEPTRAVERSAL first launches an offline phase to map media data of various forms to manifolds. Then, in its online testing phase, DEEPTRAVERSAL traverses the prepared manifold space to maximize DNN coverage criteria and trigger prediction errors. In our evaluation, DNNs executing various tasks (e.g., classification, self-driving, machine translation) and media data of different types (image, audio, text) were used. DEEPTRAVERSAL exhibits better performance than prior methods with respect to popular DNN coverage criteria, and it can discover a larger number of higher-quality error-triggering inputs. The tested DNN models, after being repaired with the findings of DEEPTRAVERSAL, achieve better accuracy.
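
    As a point of reference for the coverage criteria mentioned above, the snippet below computes a simple neuron-coverage score from pre-extracted layer activations. How activations are obtained from a given DNN, and which of the many published coverage criteria a testing framework actually maximizes, are assumptions outside this sketch.

    ```python
    import numpy as np

    def neuron_coverage(layer_activations, threshold=0.0):
        """layer_activations: list of (n_inputs, n_neurons) arrays, one per layer.
        A neuron counts as covered if its activation exceeds the threshold on any input."""
        covered = sum(int((layer > threshold).any(axis=0).sum()) for layer in layer_activations)
        total = sum(layer.shape[1] for layer in layer_activations)
        return covered / total

    # Toy usage: two layers, three test inputs.
    acts = [np.random.randn(3, 8), np.random.randn(3, 16)]
    print(f"neuron coverage = {neuron_coverage(acts, threshold=1.0):.2f}")
    ```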

    Data Management with Flexible and Extensible Data Schema in CLANS

    Data management plays an essential role in both research and industry, especially in fields that require text processing, such as the business domain. The Corporate Leaders Analytics and Network System (CLANS) is a system designed to identify and analyze social networks among corporations and business elites. It aims to tackle difficult problems such as natural language processing, network construction, and relationship mining, and it requires high-quality data management. For data management, we propose a novel approach that integrates the essential XML files and auxiliary databases with a flexible and extensible data schema. This data schema is the kernel of our data management. It offers several advantages, namely separability, scalability, traceability, distinguishability, version control, and maintainability. In this paper, we specifically illustrate the data schema as well as the management approach in CLANS.

    Decompiling x86 Deep Neural Network Executables

    Due to their widespread use on heterogeneous hardware devices, deep learning (DL) models are compiled into executables by DL compilers to fully leverage low-level hardware primitives. This approach allows DL computations to be undertaken at low cost across a variety of computing platforms, including CPUs, GPUs, and various hardware accelerators. We present BTD (Bin to DNN), a decompiler for deep neural network (DNN) executables. BTD takes DNN executables and outputs full model specifications, including types of DNN operators, network topology, dimensions, and parameters that are (nearly) identical to those of the input models. BTD delivers a practical framework to process DNN executables compiled by different DL compilers with full optimizations enabled on x86 platforms. It employs learning-based techniques to infer DNN operators, dynamic analysis to reveal network architectures, and symbolic execution to facilitate inferring the dimensions and parameters of DNN operators. Our evaluation reveals that BTD enables accurate recovery of the full specifications of complex DNNs with millions of parameters (e.g., ResNet). The recovered DNN specifications can be re-compiled into a new DNN executable exhibiting identical behavior to the input executable. We show that BTD can boost two representative attacks, adversarial example generation and knowledge stealing, against DNN executables. We also demonstrate cross-architecture legacy code reuse using BTD, and envision BTD being used for other critical downstream tasks like DNN security hardening and patching. Comment: The extended version of a paper to appear in the Proceedings of the 32nd USENIX Security Symposium, 2023 (USENIX Security '23), 25 pages.
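
    As a much-simplified illustration of the dimension-inference step mentioned above, the toy function below recovers the (in_features, out_features) shape of a fully-connected operator from observed buffer element counts. It is only meant to convey the flavor of the analysis and is not BTD's actual algorithm.

    ```python
    def infer_dense_dims(input_len: int, weight_len: int) -> tuple[int, int]:
        """Toy dimension inference for a fully-connected operator: given the
        observed element counts of the input and weight buffers, recover
        (in_features, out_features)."""
        if weight_len % input_len != 0:
            raise ValueError("weight buffer size is not a multiple of the input length")
        return input_len, weight_len // input_len

    print(infer_dense_dims(784, 784 * 128))  # -> (784, 128)
    ```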

    Modeling the role of p53 pulses in DNA damage-induced cell death decision

    Background: The tumor suppressor p53 plays pivotal roles in tumorigenesis suppression. Although oscillations of p53 have been extensively studied, the mechanism of p53 pulses and their physiological roles in the DNA damage response remain unclear. Results: To address these questions, we presented an integrated model in which Ataxia-Telangiectasia Mutated (ATM) activation and p53 oscillation were incorporated with downstream apoptotic events, particularly the interplay between Bcl-2 family proteins. We first reproduced the digital oscillation of p53 as the response of normal cells to DNA damage. Subsequent modeling in mutant cells showed that high basal DNA damage is a plausible cause for the sustained p53 pulses observed in tumor cells. Further computational analyses indicated that p53-dependent PUMA accumulation and the PUMA-controlled Bax activation switch might play pivotal roles in counting p53 pulses and thus deciding the cell fate. Conclusion: High levels of basal DNA damage are responsible for generating sustained pulses of p53 in tumor cells. Meanwhile, the Bax activation switch can count p53 pulses through PUMA accumulation and translate them into a death signal. Our modeling provides a plausible mechanism for how cells generate and orchestrate p53 pulses to tip the balance between survival and death.
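
    A minimal sketch of the kind of ODE system involved is shown below: a p53-Mdm2 negative-feedback loop driven by a DNA-damage signal, with switch-like, p53-dependent PUMA accumulation. The equations and parameter values are illustrative placeholders, not the authors' published model.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, y, damage=1.0):
        p53, mdm2_mrna, mdm2, puma = y
        dp53 = 1.0 * damage - 1.5 * mdm2 * p53                   # production vs Mdm2-mediated degradation
        dmrna = 1.0 * p53**2 / (0.5 + p53**2) - 0.8 * mdm2_mrna  # p53-induced Mdm2 transcription
        dmdm2 = 0.8 * mdm2_mrna - 0.9 * mdm2                     # translation and turnover
        dpuma = 0.3 * p53**4 / (1.0 + p53**4) - 0.05 * puma      # switch-like PUMA accumulation
        return [dp53, dmrna, dmdm2, dpuma]

    sol = solve_ivp(rhs, (0.0, 100.0), [0.1, 0.0, 0.1, 0.0], dense_output=True)
    t = np.linspace(0.0, 100.0, 500)
    p53, _, _, puma = sol.sol(t)
    print(f"peak p53 = {p53.max():.2f}, final PUMA = {puma[-1]:.2f}")  # PUMA integrates the p53 signal
    ```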

    Unveiling Single-Bit-Flip Attacks on DNN Executables

    Recent research has shown that bit-flip attacks (BFAs) can manipulate deep neural networks (DNNs) via DRAM Rowhammer exploitations. Existing attacks are primarily launched over high-level DNN frameworks like PyTorch and flip bits in model weight files. Nevertheless, DNNs are frequently compiled into low-level executables by deep learning (DL) compilers to fully leverage low-level hardware primitives. The compiled code is usually high-speed and manifests dramatically distinct execution paradigms from high-level DNN frameworks. In this paper, we launch the first systematic study of the BFA attack surface specifically for DNN executables compiled by DL compilers. We design an automated search tool to identify vulnerable bits in DNN executables and identify practical attack vectors that exploit the model structure in DNN executables with BFAs (whereas prior works make arguably strong assumptions in order to attack model weights). DNN executables appear more "opaque" than models in high-level DNN frameworks. Nevertheless, we find that DNN executables contain extensive, severe (e.g., single-bit flip), and transferable attack surfaces that are not present in high-level DNN models and can be exploited to deplete full model intelligence and control output labels. Our findings call for incorporating security mechanisms in future DNN compilation toolchains.
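
    To make the search idea concrete, the toy sketch below brute-forces single bit flips in the float32 weights of a small in-memory linear model and ranks them by accuracy drop. The paper's tool targets bits inside compiled DNN executables, which this sketch does not attempt to reproduce; the model and data here are arbitrary placeholders.

    ```python
    import numpy as np

    def flip_bit(w: np.ndarray, flat_idx: int, bit: int) -> np.ndarray:
        """Return a copy of the float32 weights with one bit flipped."""
        out = w.copy()
        out.view(np.uint32).reshape(-1)[flat_idx] ^= np.uint32(1 << bit)
        return out

    def accuracy(w, X, y):
        return float(np.mean((X @ w).argmax(axis=1) == y))

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 16)).astype(np.float32)
    w = rng.standard_normal((16, 4)).astype(np.float32)
    y = (X @ w).argmax(axis=1)            # labels the clean model predicts correctly by construction

    base = accuracy(w, X, y)              # 1.0 in this toy setup
    results = []
    for idx in range(w.size):
        for bit in (30, 31):              # high exponent / sign bits are typically the most damaging
            drop = base - accuracy(flip_bit(w, idx, bit), X, y)
            results.append((drop, idx, bit))
    print(sorted(results, reverse=True)[:3])   # the most damaging single-bit flips found
    ```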

    Haematology specimen acceptability: a national survey in Chinese laboratories

    Introduction: Specimen adequacy is a crucial preanalytical factor affecting the accuracy and usefulness of test results. The aim of this study was to determine the frequency of and reasons for rejected haematology specimens, the preanalytical variables which may affect specimen quality, and the consequences of rejection, and to provide suggestions on monitoring quality indicators so as to obtain a quality improvement. Materials and methods: A cross-sectional survey was conducted and a questionnaire was sent to 1586 laboratories. Participants were asked to provide general information about their institution and practices on specimen management and to record rejections and reasons for rejection from 1st to 31st July. Results: The total survey response rate was 56% (890/1586). Of 10,181,036 tubes received during the data collection period, 11,447 (0.11%) were rejected, and the sigma (σ) was 4.6. The main reason for unacceptable specimens was clotted specimens (57%). Rejected specimens were related to source department, container type, container material type, transportation method and phlebotomy personnel. Recollection was required for 84% of the rejected specimens. The median specimen processing delays in the inpatient, outpatient and emergency departments were 81.0 minutes, 57.0 minutes and 43.3 minutes, respectively. Conclusions: Overall, the rejection rate was slightly lower than previously published data. In order to achieve better quality in the preanalytical phase, haematology laboratories in China should pay more attention to training for phlebotomy and sample transportation, identify the main reasons for clotted specimens and take effective measures. The platform in the study will be helpful for long-term monitoring, but simplification and modification should be introduced in the following investigation.
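
    The reported sigma value can be reproduced from the rejection counts, assuming the conventional 1.5-sigma shift used in Six Sigma calculations (an assumption of this sketch, not stated in the abstract):

    ```python
    from scipy.stats import norm

    rejected, total = 11447, 10181036
    defect_rate = rejected / total                  # ~0.11% of received tubes rejected
    sigma = norm.ppf(1 - defect_rate) + 1.5         # short-term sigma with the usual 1.5-sigma shift
    print(f"rejection rate = {defect_rate:.2%}, sigma = {sigma:.1f}")   # ~0.11%, ~4.6
    ```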

    Computer-Aided Drug Design of Capuramycin Analogues as Anti-Tuberculosis Antibiotics by 3D-QSAR and Molecular Docking

    Capuramycin and a few semisynthetic derivatives have shown potential as anti-tuberculosis antibiotics. To understand their mechanism of action and structure-activity relationships, 3D-QSAR and molecular docking studies were performed. A set of 52 capuramycin derivatives was used for the training set and 13 for the validation set. A highly predictive MFA model was obtained with a cross-validated q2 of 0.398, and non-cross-validated partial least-squares (PLS) analysis showed a conventional r2 of 0.976 and an r2pred of 0.839. The model has an excellent predictive ability. Combining the 3D-QSAR and molecular docking studies, a number of new capuramycin analogs with predicted improved activities were designed. Biological activity tests of one analog showed useful antibiotic activity against Mycobacterium smegmatis MC2 155 and Mycobacterium tuberculosis H37Rv. Computer-aided molecular docking and 3D-QSAR can improve the design of new capuramycin antimycobacterial antibiotics.
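
    For reference, the sketch below shows how a conventional r2 and a leave-one-out cross-validated q2 are typically computed for a PLS model, using scikit-learn on random placeholder data rather than the actual capuramycin descriptor fields; the number of PLS components is likewise an arbitrary choice here.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut

    rng = np.random.default_rng(0)
    X = rng.standard_normal((52, 30))                            # 52 training compounds, 30 descriptors
    y = X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(52)     # placeholder activity values

    pls = PLSRegression(n_components=3).fit(X, y)
    r2 = pls.score(X, y)                                         # conventional (non-cross-validated) r2

    press, tss = 0.0, float(np.sum((y - y.mean()) ** 2))
    for train, test in LeaveOneOut().split(X):
        model = PLSRegression(n_components=3).fit(X[train], y[train])
        press += ((y[test] - model.predict(X[test]).ravel()) ** 2).item()
    print(f"r2 = {r2:.3f}, q2 (LOO) = {1 - press / tss:.3f}")    # q2 = 1 - PRESS / TSS
    ```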